Multi-Armed Bandit for Species Discovery: A Bayesian Nonparametric Approach

Authors
Abstract


Similar articles

Multi–Armed Bandit for Pricing

This paper studies Multi–Armed Bandit (MAB) approaches for pricing applications, where a seller needs to identify the selling price for a particular kind of item that maximizes her/his profit without knowing the buyer demand. We propose modifications to the popular Upper Confidence Bound (UCB) bandit algorithm that exploit two peculiarities of pricing applications: 1) as the selling...
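
As context for the snippet above, a minimal sketch of the standard UCB1 rule that the paper modifies; the candidate prices, demand curve, and reward rescaling below are illustrative assumptions, not the paper's model.

    import math
    import random

    def ucb1(n_arms, horizon, pull):
        """Standard UCB1: play each arm once, then repeatedly pick the arm
        with the highest empirical mean plus an exploration bonus."""
        counts = [0] * n_arms        # times each arm was played
        sums = [0.0] * n_arms        # cumulative reward per arm
        for t in range(1, horizon + 1):
            if t <= n_arms:
                arm = t - 1          # initialization: try every arm once
            else:
                arm = max(range(n_arms),
                          key=lambda a: sums[a] / counts[a]
                          + math.sqrt(2 * math.log(t) / counts[a]))
            r = pull(arm)            # observe a reward in [0, 1]
            counts[arm] += 1
            sums[arm] += r
        return sums, counts

    # Hypothetical pricing use: arms are candidate prices, the reward is the
    # revenue from a Bernoulli "buy" decision (the demand curve is made up).
    prices = [5.0, 7.5, 10.0, 12.5]
    def pull(arm):
        buy = random.random() < max(0.0, 1.0 - prices[arm] / 15.0)
        return prices[arm] * buy / max(prices)   # rescale revenue into [0, 1]

    sums, counts = ucb1(len(prices), 5000, pull)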


Approximation Algorithms for Bayesian Multi-Armed Bandit Problems

In this paper, we consider several finite-horizon Bayesian multi-armed bandit problems with side constraints. These constraints include metric switching costs between arms, delayed feedback about observations, concave reward functions over plays, and explore-then-exploit models. These problems do not have any known optimal (or near optimal) algorithms in sub-exponential running time; several of...
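
For concreteness, a minimal sketch of the explore-then-exploit pattern named in the snippet; the exploration budget m and the commit rule are illustrative assumptions, and the paper's approximation algorithms for the constrained settings are not reproduced here.

    def explore_then_exploit(n_arms, m, horizon, pull):
        """Pull every arm m times (explore), then commit to the arm with the
        best empirical mean for the remaining rounds (exploit)."""
        means = [sum(pull(a) for _ in range(m)) / m for a in range(n_arms)]
        best = max(range(n_arms), key=lambda a: means[a])
        return best, sum(pull(best) for _ in range(horizon - m * n_arms))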


A modern Bayesian look at the multi-armed bandit

A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according to the Bayesian posterior probability that each arm is optimal. Advances...
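
For Bernoulli rewards with Beta priors, randomized probability matching amounts to sampling a mean from each arm's posterior and playing the argmax (now commonly called Thompson sampling); a minimal sketch, with the uniform Beta(1, 1) priors as an illustrative assumption:

    import random

    def thompson_bernoulli(n_arms, horizon, pull):
        """Randomized probability matching with Beta(1, 1) priors: sample a
        mean for each arm from its posterior and play the argmax, so each arm
        is chosen with exactly its posterior probability of being optimal."""
        alpha = [1.0] * n_arms   # posterior successes + 1
        beta = [1.0] * n_arms    # posterior failures + 1
        for _ in range(horizon):
            samples = [random.betavariate(alpha[a], beta[a])
                       for a in range(n_arms)]
            arm = max(range(n_arms), key=lambda a: samples[a])
            r = pull(arm)        # Bernoulli reward in {0, 1}
            alpha[arm] += r
            beta[arm] += 1 - r
        return alpha, beta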


Online Multi-Armed Bandit

We introduce a novel variant of the multi-armed bandit problem, in which bandits are streamed one at a time to the player, and at each point, the player can either choose to pull the current bandit or move on to the next bandit. Once a player has moved on from a bandit, they may never visit it again, which is a crucial difference between our problem and classic multi-armed bandit problems. In t...
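
A minimal sketch of one possible strategy for this streamed protocol, assuming a fixed trial size and acceptance threshold; it illustrates the pull-or-move-on rule, not the strategy analyzed in the paper.

    def streaming_play(bandit_stream, budget, n_trial=10, threshold=0.6):
        """Sample each arriving bandit a few times; if its empirical mean
        clears the threshold, commit the remaining budget to it, otherwise
        discard it forever and move on to the next bandit in the stream."""
        total = 0.0
        for pull in bandit_stream:       # each bandit = a reward-sampling function
            if budget <= 0:
                break
            trial = min(n_trial, budget)
            rewards = [pull() for _ in range(trial)]
            total += sum(rewards)
            budget -= trial
            if budget > 0 and sum(rewards) / trial >= threshold:
                total += sum(pull() for _ in range(budget))  # commit: never look back
                break
        return total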


Discover Relevant Sources: A Multi-Armed Bandit Approach

Existing work on online learning for decision making takes the information available as a given and focuses solely on choosing the best actions given this information. Instead, in this paper, the decision maker needs to simultaneously learn both what decisions to make and what source(s) of information to consult/gather data from in order to inform its decisions such that its reward is maximized...
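
One way to picture this setting, as a hedged illustration rather than the paper's method: treat each information source as a bandit arm and score it by the decision reward obtained after consulting it; the epsilon-greedy rule below is an illustrative stand-in.

    import random

    def select_source_and_act(sources, act, horizon, eps=0.1):
        """Illustrative reduction: learn which source's signals lead to the
        highest decision reward, using a simple epsilon-greedy rule."""
        n = len(sources)
        counts = [0] * n
        means = [0.0] * n
        for _ in range(horizon):
            if random.random() < eps or min(counts) == 0:
                s = counts.index(min(counts))        # explore an under-sampled source
            else:
                s = max(range(n), key=lambda i: means[i])
            signal = sources[s]()                    # consult the chosen source
            r = act(signal)                          # act on its signal, observe reward
            counts[s] += 1
            means[s] += (r - means[s]) / counts[s]   # running-mean update
        return means, counts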



Journal

Journal title: Journal of the American Statistical Association

Year: 2018

ISSN: 0162-1459, 1537-274X

DOI: 10.1080/01621459.2016.1261711